Small sample size generalization

Author

  • Robert P.W. Duin
Abstract

The generalization of linear classifiers is considered for training sample sizes smaller than the feature size. It is shown that there exists a good linear classifier that is better than the Nearest Mean classifier for sample sizes for which Fisher's linear discriminant cannot be used. The use and performance of this small sample size classifier are illustrated by some examples.
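The obstacle the abstract refers to can be made concrete: Fisher's linear discriminant requires inverting the pooled within-class covariance estimate, which is singular whenever the training sample size is below the feature size, whereas the Nearest Mean classifier uses only the class means. A minimal sketch of this situation, assuming Gaussian data and using a pseudo-inverse discriminant as one common small-sample workaround (the paper's own classifier is not reproduced here):

```python
# Hypothetical illustration (not the paper's exact classifier): with fewer
# training samples than features, the pooled within-class covariance is
# singular, so Fisher's discriminant has no true inverse to work with,
# while the Nearest Mean classifier needs no covariance estimate at all.
import numpy as np

rng = np.random.default_rng(0)
n_per_class, n_features = 5, 20              # sample size < feature size
X0 = rng.normal(0.0, 1.0, (n_per_class, n_features))
X1 = rng.normal(0.5, 1.0, (n_per_class, n_features))
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)

# Pooled covariance has rank at most 2*(n_per_class - 1) < n_features.
S = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
print("rank(S) =", np.linalg.matrix_rank(S), "of", n_features)

w_fisher = np.linalg.pinv(S) @ (m1 - m0)     # pseudo-inverse workaround
w_nmean = m1 - m0                            # Nearest Mean direction

def classify(X, w):
    # Assign each point to the class whose projected mean is closer.
    p = X @ w
    return (np.abs(p - m1 @ w) < np.abs(p - m0 @ w)).astype(int)

X_test = rng.normal(0.5, 1.0, (100, n_features))  # class-1 test points
print("pseudo-inverse discriminant accuracy:", classify(X_test, w_fisher).mean())
print("nearest mean accuracy:", classify(X_test, w_nmean).mean())
```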


Similar articles

Trainable fusion rules. II. Small sample-size effects

A profound theoretical analysis is performed of the small-sample properties of trainable fusion rules to determine in which situations neural network ensembles can improve or degrade classification results. We consider small-sample effects, specific to multiple-classifier system design, in the two-category case for two important fusion rules: (1) the linear weighted average (weighted voting), realize...
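The weighted-voting rule named in this entry can be sketched briefly. The sketch below assumes two nearest-mean base classifiers trained on disjoint feature subsets and accuracy-proportional fusion weights; both choices are illustrative assumptions, not the exact rules analyzed in the cited paper.

```python
# Minimal sketch of a trainable linear weighted-average (weighted-voting)
# fusion rule for a two-category problem. The nearest-mean base learners,
# the disjoint feature split, and the accuracy-proportional weights are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def nearest_mean_score(X_train, y_train, X):
    # Signed linear score: positive values favor class 1.
    m0 = X_train[y_train == 0].mean(axis=0)
    m1 = X_train[y_train == 1].mean(axis=0)
    w = m1 - m0
    return X @ w - 0.5 * (m0 + m1) @ w

# Small two-class training set; class 1 is shifted by 0.7 in every feature.
X = rng.normal(0.0, 1.0, (40, 10))
y = np.repeat([0, 1], 20)
X[y == 1] += 0.7

# Two base classifiers, each trained on half of the features.
s1 = nearest_mean_score(X[:, :5], y, X[:, :5])
s2 = nearest_mean_score(X[:, 5:], y, X[:, 5:])

# Trainable fusion: weight each expert by its training accuracy.
a1 = ((s1 > 0) == y).mean()
a2 = ((s2 > 0) == y).mean()
w1, w2 = a1 / (a1 + a2), a2 / (a1 + a2)
fused = w1 * s1 + w2 * s2
print("fused training accuracy:", ((fused > 0) == y).mean())
```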

Learning curves from a modified VC-formalism: a case study

In this paper we present a case study of a 1-dimensional higher-order neuron using a statistical approach to learning theory which incorporates some information on the distribution on the sample space and can be viewed as a modification of the Vapnik-Chervonenkis formalism (VC-formalism). We concentrate on learning curves defined as averages of the worst generalization error of binary hypothesi...

The analysis of very small samples of repeated measurements I: an adjusted sandwich estimator.

The statistical analysis of repeated measures or longitudinal data always requires the accommodation of the covariance structure of the repeated measurements at some stage in the analysis. The general linear mixed model is often used for such analyses, and allows for the specification of both a mean model and a covariance structure. Often the covariance structure itself is not of direct interes...

An Incremental Learning Algorithm That Optimizes Network Size and Sample Size in One Trial

A constructive learning algorithm is described that builds a feedforward neural network with an optimal number of hidden units to balance convergence and generalization. The method starts with a small training set and a small network, and expands the training set incrementally after training. If the training does not converge, the network grows incrementally to increase its learning capacity....
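The constructive loop this entry describes (expand the training set after successful training; grow the network when training fails to converge) can be sketched as follows. The convergence threshold, the growth steps, and the use of scikit-learn's MLPClassifier are illustrative assumptions, not details from the cited paper.

```python
# Minimal sketch of the constructive loop: fit a growing training subset;
# if the net cannot fit the current subset, add a hidden unit; otherwise
# admit more samples. Thresholds and step sizes are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.normal(0.0, 1.0, (200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)       # XOR-like, needs hidden units

hidden, n_used = 1, 20
while n_used <= len(X) and hidden <= 20:
    net = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=2000,
                        random_state=0)
    net.fit(X[:n_used], y[:n_used])
    if net.score(X[:n_used], y[:n_used]) < 0.95:  # treated as non-convergence
        hidden += 1                               # grow the network
    else:
        n_used += 20                              # expand the training set
print("hidden units:", hidden, "samples used:", min(n_used, len(X)))
```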

Publication date: 2006